13 research outputs found

    Transformer-based Dual-domain Network for Few-view Dedicated Cardiac SPECT Image Reconstructions

    Full text link
    Cardiovascular disease (CVD) is the leading cause of death worldwide, and myocardial perfusion imaging using SPECT has been widely used in the diagnosis of CVDs. The GE 530/570c dedicated cardiac SPECT scanners adopt a stationary geometry to simultaneously acquire 19 projections, increasing sensitivity and enabling dynamic imaging. However, the limited angular sampling negatively affects image quality; this is essentially a few-view imaging problem, and deep learning methods can be implemented to produce higher-quality images from stationary data. In this work, we propose a novel 3D transformer-based dual-domain network, called TIP-Net, for high-quality 3D cardiac SPECT image reconstruction. Our method first reconstructs 3D cardiac SPECT images directly from projection data, without an iterative reconstruction process, using a customized projection-to-image domain transformer. Then, given this reconstruction output and the original few-view reconstruction, it further refines the result using an image-domain reconstruction network. Validated by cardiac catheterization images, diagnostic interpretations from nuclear cardiologists, and defect size quantified by an FDA 510(k)-cleared clinical software, our method produced images with higher cardiac defect contrast on human studies than previous baseline methods, potentially enabling high-quality defect visualization using stationary few-view dedicated cardiac SPECT scanners. Comment: Early accepted by MICCAI 2023 in Vancouver, Canada.
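    The dual-domain idea in this abstract can be caricatured in a few lines: a projection-domain stage maps few-view data straight to an image estimate, and an image-domain stage refines it together with the classical few-view reconstruction. The sketch below uses hypothetical stand-ins (a plain adjoint/backprojection and a fixed blend), not the authors' learned networks; all names and the toy geometry are illustrative assumptions.

```python
import numpy as np

# Hypothetical dual-domain sketch (illustrative only, not the TIP-Net code):
# stage 1 maps few-view projection data straight to an image estimate, and
# stage 2 refines it together with a classical few-view reconstruction.

def stage1_proj_to_image(projections, system_matrix):
    # Stand-in for the learned projection-to-image transformer: a plain
    # adjoint (backprojection) of the few-view data.
    return system_matrix.T @ projections

def stage2_refine(stage1_image, fewview_recon):
    # Stand-in for the image-domain refinement network: a fixed blend of the
    # two image-domain inputs (a real network learns this combination).
    return 0.5 * (stage1_image + fewview_recon)

# Toy geometry: identity "system matrix", so backprojection is exact.
A = np.eye(4)
proj = np.array([1.0, 2.0, 3.0, 4.0])
image = stage2_refine(stage1_proj_to_image(proj, A), fewview_recon=proj)
```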

    Fast-MC-PET: A Novel Deep Learning-aided Motion Correction and Reconstruction Framework for Accelerated PET

    Full text link
    Patient motion during PET is inevitable. The long acquisition time not only increases motion and the associated artifacts but also the patient's discomfort, so accelerating PET is desirable. However, accelerating PET acquisition results in reconstructed images with low SNR, and image quality is still degraded by motion-induced artifacts. Most previous PET motion correction methods are motion-type specific and require motion modeling, so they may fail when multiple types of motion are present together. Moreover, those methods are customized for standard long acquisitions and cannot be directly applied to accelerated PET. Modeling-free universal motion correction and reconstruction for accelerated PET therefore remains highly under-explored. In this work, we propose a novel deep learning-aided motion correction and reconstruction framework for accelerated PET, called Fast-MC-PET. Our framework consists of a universal motion correction (UMC) module and a short-to-long acquisition reconstruction (SL-Recon) module. UMC enables modeling-free motion correction by estimating quasi-continuous motion from ultra-short frame reconstructions and using this information for motion-compensated reconstruction. SL-Recon then converts the accelerated UMC image with low counts into a high-quality image with high counts for the final reconstruction output. Our experimental results on human studies show that Fast-MC-PET enables 7-fold acceleration, using only a 2-minute acquisition to generate high-quality reconstructed images that outperform or match previous motion correction reconstruction methods using standard 15-minute long-acquisition data. Comment: Accepted at Information Processing in Medical Imaging (IPMI 2023).
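    The UMC step described above (estimate motion from ultra-short frames, then reconstruct with motion compensation) can be illustrated with a deliberately simplified 1-D toy: integer shifts estimated by circular cross-correlation, undone, and averaged. This is an assumed simplification for illustration, not the authors' implementation, and all function names are invented.

```python
import numpy as np

# Toy 1-D, integer-shift illustration of the UMC idea: estimate each
# ultra-short frame's motion against a reference by circular
# cross-correlation, undo it, then average the aligned frames into one
# motion-compensated image.

def estimate_shift(frame, reference):
    # Circular cross-correlation; the argmax is the shift that best
    # aligns `frame` with `reference`.
    scores = [np.dot(np.roll(frame, -s), reference) for s in range(len(frame))]
    return int(np.argmax(scores))

def motion_compensated_average(frames, reference):
    # Undo each frame's estimated shift, then average the aligned frames.
    aligned = [np.roll(f, -estimate_shift(f, reference)) for f in frames]
    return np.mean(aligned, axis=0)
```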

    FedFTN: Personalized Federated Learning with Deep Feature Transformation Network for Multi-institutional Low-count PET Denoising

    Full text link
    Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns over patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions. Comment: 13 pages, 6 figures, Accepted at Medical Image Analysis Journal (MedIA).
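    The communication pattern described above — only the shared denoiser is averaged across sites, while each site's FTN never leaves the institution — can be sketched minimally as below. Function and variable names are illustrative assumptions, and the "weights" are toy arrays rather than real network parameters.

```python
import numpy as np

# Minimal sketch of a FedFTN-style round: only the shared denoising-network
# weights are averaged across institutions (FedAvg-style); each site's
# feature transformation network (FTN) stays local and is never communicated.

def federated_round(sites):
    # Server side: mean over the shared denoiser weights only.
    global_denoiser = np.mean([s["denoiser"] for s in sites], axis=0)
    for s in sites:
        s["denoiser"] = global_denoiser.copy()  # broadcast global weights
        # s["ftn"] is untouched: personalization is preserved per institution
    return sites

sites = [
    {"denoiser": np.array([1.0, 2.0]), "ftn": np.array([0.1])},
    {"denoiser": np.array([3.0, 4.0]), "ftn": np.array([0.9])},
]
sites = federated_round(sites)
```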

    A Fast Lock for Explicit Message Passing Architectures

    No full text

    The Effect of CRITSIMA Process Parameters on the Microstructure Evolution and Element Segregation of Semi-Solid CuSn10P1 Alloy Billet

    No full text
    In this paper, a semi-solid CuSn10P1 alloy billet was prepared from the as-cast CuSn10P1 alloy by cold-rolled isothermal treatment strain-induced melting activation (CRITSIMA). The effects of cold-rolling reduction, isothermal temperature, and isothermal time on the microstructure of the semi-solid copper alloy billet were studied using a metallographic microscope and Image-Pro Plus software. The changes in the primary elements of the as-cast and semi-solid microstructures were briefly analyzed by a scanning electron microscope (SEM) equipped with energy-dispersive spectroscopy (EDS). The results show that with increasing cold-rolling reduction, the average grain diameter of the semi-solid microstructure decreases gradually, the average grain roundness first increases and then decreases, and the liquid fraction of the microstructure remains unchanged. During semi-solid isothermal treatment, with increasing isothermal temperature and extended isothermal time, the average grain diameter increases gradually, the average grain roundness first increases and then decreases, and the liquid fraction increases gradually. When the cold-rolling reduction is 30%, the isothermal temperature is 900 ℃, and the isothermal time is 20 min, a better microstructure is obtained: the average grain diameter, average grain roundness, and liquid fraction of the semi-solid alloy billet are 66.45 μm, 0.71, and 12.78%, respectively. Sn and P diffuse from the intergranular liquid into the grain interior during the isothermal treatment from the as-cast to the semi-solid state.

    Photonic Microwave Up-Conversion Link With Compensation of Chromatic Dispersion-Induced Power Fading

    No full text

    Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT

    Full text link
    Single-photon emission computed tomography (SPECT) is a widely applied imaging approach for the diagnosis of coronary artery diseases. Attenuation maps (μ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, SPECT and CT are obtained sequentially in clinical practice, which potentially induces misregistration between the two scans. Convolutional neural networks (CNNs) are powerful tools for medical image registration. Previous CNN-based methods for cross-modality registration either directly concatenated the two input modalities as an early feature fusion or extracted image features using two separate CNN modules for a late fusion. These methods do not fully extract or fuse the cross-modality information. Moreover, deep-learning-based rigid registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for the registration of cardiac SPECT and CT-derived μ-maps. DuSFE fuses the knowledge from multiple modalities to recalibrate both channel-wise and spatial features for each modality. DuSFE can be embedded at multiple convolutional layers to enable feature fusion at different spatial dimensions. Our studies using clinical data demonstrated that a network embedded with DuSFE generated substantially lower registration errors and therefore more accurate AC SPECT images than previous methods. Comment: 10 pages, 4 figures, accepted at MICCAI 2022.
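    The squeeze-fusion-excitation idea described above can be illustrated with a toy two-branch version: squeeze each modality's feature map to a channel descriptor, fuse the descriptors, and use the fused gate to re-weight both branches. This is an assumed simplification (the real module uses small learned layers and also recalibrates spatial features), and the function name is invented.

```python
import numpy as np

# Toy squeeze-fusion-excitation sketch for two modality branches.
# Each branch is squeezed to a channel descriptor, the descriptors are
# fused, and the fused gate re-weights both branches' channels.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_fuse_excite(feat_a, feat_b):
    # feat_*: (channels, H, W) feature maps, e.g. SPECT and CT branches
    z_a = feat_a.mean(axis=(1, 2))   # squeeze branch A to channel descriptor
    z_b = feat_b.mean(axis=(1, 2))   # squeeze branch B to channel descriptor
    gate = sigmoid(z_a + z_b)        # fuse + excite (toy, no learned layers)
    # Recalibrate each branch's channels with the cross-modality gate
    return feat_a * gate[:, None, None], feat_b * gate[:, None, None]
```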

    High-Efficiency Extraction and Modification on the Coal Liquefaction Residue Using Supercritical Fluid with Different Types of Solvents

    No full text
    This study aims to systematically illustrate the mechanism of supercritical fluid extraction (SFE) and modification of the coal liquefaction residue (CLR) and to identify the evolution and characteristics of the mesophase produced from the carbonization of SFE extracts. Results show that the extraction performance of SFE and the properties of the mesophase precursor were strongly dependent upon the selection of operating conditions and solvents. The SFE process using acetone and isopropanol presented excellent extraction performance, owing to the effect of solvent polarity on the degradation or supercritical reaction, achieving respective CLR extraction yields of 45.85 and 30.12 wt %, while an extraction yield of 53.78 wt % was attained when using benzene, benefiting from its strong affinity to condensed aromatic hydrocarbons. More practically, the quinoline-insoluble (QI) fraction decreased from 48.84 to 1.13 wt % after SFE processing, which significantly upgraded the quality of the mesophase precursor. To an extent, supercritical acetone exhibited strong reaction activity during extraction because its extract contained a higher amount of the hexane-soluble (HS) fraction, which could optimize the molecular weight distribution of the mesophase precursor. The well-developed bulk mesophase in the carbonized SFE extracts was remarkably improved in comparison to raw CLR. Presumably, the SFE extract was favorable to forming 100% mesophase, where dominant flow textures were observed.
